[jira] [Commented] (KAFKA-5746) Add new metrics to support health checks

2017-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187238#comment-16187238
 ] 

ASF GitHub Bot commented on KAFKA-5746:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3996


> Add new metrics to support health checks
> 
>
> Key: KAFKA-5746
> URL: https://issues.apache.org/jira/browse/KAFKA-5746
> Project: Kafka
>  Issue Type: New Feature
>  Components: metrics
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 1.0.0
>
>
> It will be useful to have some additional metrics to support health checks.
> Details are in 
> [KIP-188|https://cwiki.apache.org/confluence/display/KAFKA/KIP-188+-+Add+new+metrics+to+support+health+checks]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-09-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-5998:
--
Attachment: 5998.v1.txt

{code}
Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
log.debug("Non-atomic move of {} to {} succeeded after atomic move failed due to {}",
        source, target, outer.getMessage());
} catch (IOException inner) {
{code}
If an IOException is thrown by the move call, the source file might be left 
behind.

Patch v1 is one way to avoid leaving the source behind. Another option is to 
delete the source explicitly.
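
For illustration, a minimal sketch of the clean-up idea (a hypothetical helper, 
not the attached patch; the actual change is in 5998.v1.txt):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch: try an atomic move first, fall back to a plain move,
// and if that also fails, delete the source so no stale .tmp file is left behind.
static void moveWithFallback(Path source, Path target) throws IOException {
    try {
        Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
    } catch (IOException outer) {
        try {
            Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
        } catch (IOException inner) {
            Files.deleteIfExists(source); // the "delete the source" option
            inner.addSuppressed(outer);
            throw inner;
        }
    }
}
{code}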

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Yogesh BG
> Attachments: 5998.v1.txt
>
>
> I have one Kafka broker and one Kafka Streams application running. It has 
> been running for two days under a load of around 2500 msgs per second. On 
> the third day I started getting the exception below for some of the 
> partitions. I have 16 partitions; only 0_0 and 0_1 give this error:
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> 

[jira] [Updated] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-09-30 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5998:
---
Priority: Major  (was: Trivial)

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Yogesh BG
>
> I have one Kafka broker and one Kafka Streams application running. It has 
> been running for two days under a load of around 2500 msgs per second. On 
> the third day I started getting the exception below for some of the 
> partitions. I have 16 partitions; only 0_0 and 0_1 give this error:
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  

[jira] [Assigned] (KAFKA-5967) Ineffective check of negative value in CompositeReadOnlyKeyValueStore#approximateNumEntries()

2017-09-30 Thread siva santhalingam (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

siva santhalingam reassigned KAFKA-5967:


Assignee: siva santhalingam

> Ineffective check of negative value in 
> CompositeReadOnlyKeyValueStore#approximateNumEntries()
> -
>
> Key: KAFKA-5967
> URL: https://issues.apache.org/jira/browse/KAFKA-5967
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: siva santhalingam
>Priority: Minor
>  Labels: beginner, newbie
>
> {code}
> long total = 0;
> for (ReadOnlyKeyValueStore<K, V> store : stores) {
>     total += store.approximateNumEntries();
> }
> return total < 0 ? Long.MAX_VALUE : total;
> {code}
> The check for a negative value is meant to account for overflow (wrapping).
> However, the sum can wrap within the for loop and then wrap back to a 
> positive value after further additions, so the check should be performed 
> inside the loop.
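
For illustration, a minimal sketch of the suggested fix (a hypothetical method 
body, not the committed patch; it clamps as soon as the running sum wraps):

{code}
long total = 0;
for (ReadOnlyKeyValueStore<K, V> store : stores) {
    total += store.approximateNumEntries();
    if (total < 0) {
        // overflow detected inside the loop, before it can wrap back to positive
        return Long.MAX_VALUE;
    }
}
return total;
{code}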



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5972) Flatten SMT does not work with null values

2017-09-30 Thread siva santhalingam (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187181#comment-16187181
 ] 

siva santhalingam commented on KAFKA-5972:
--

Hi [~tomas.zuk...@gmail.com] Can I assign this to myself? Thanks!

> Flatten SMT does not work with null values
> --
>
> Key: KAFKA-5972
> URL: https://issues.apache.org/jira/browse/KAFKA-5972
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0, 0.11.0.1
>Reporter: Tomas Zuklys
>Priority: Minor
>  Labels: easyfix, patch
> Attachments: kafka-transforms.patch
>
>
> Hi,
> I noticed a bug in the Flatten SMT while testing the different SMTs that are 
> provided out of the box.
> The Flatten SMT does not work as expected with schemaless JSON that has 
> properties with null values. 
> Example json:
> {code}
>   {A={D=dValue, B=null, C=cValue}}
> {code}
> The issue is in the if statement that checks for a null value.
> Current version:
> {code}
> for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
>     final String fieldName = fieldName(fieldNamePrefix, entry.getKey());
>     Object value = entry.getValue();
>     if (value == null) {
>         newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), null);
>         return;
>     }
>     ...
> {code}
> should be
> {code}
> for (Map.Entry<String, Object> entry : originalRecord.entrySet()) {
>     final String fieldName = fieldName(fieldNamePrefix, entry.getKey());
>     Object value = entry.getValue();
>     if (value == null) {
>         newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), null);
>         continue;
>     }
> {code}
> I have attached a patch containing the fix for this issue.
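
For illustration, a minimal sketch of how the fix can be exercised (assumes 
Connect's Flatten transform with its default delimiter and a schemaless record 
mirroring the example JSON above; "topic" is a placeholder name):

{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.transforms.Flatten;

Flatten.Value<SourceRecord> xform = new Flatten.Value<>();
xform.configure(Collections.<String, Object>emptyMap()); // default "." delimiter

Map<String, Object> inner = new HashMap<>();
inner.put("D", "dValue");
inner.put("B", null);   // the null property that triggers the bug
inner.put("C", "cValue");
Map<String, Object> value = Collections.<String, Object>singletonMap("A", inner);

// Schemaless record: the value schema is null and the value is a Map.
SourceRecord record = new SourceRecord(null, null, "topic", null, value);
SourceRecord flattened = xform.apply(record);
// With `continue`, flattened.value() holds A.D, A.B (null), and A.C; with the
// buggy `return`, entries visited after the null one are silently dropped.
{code}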



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-09-30 Thread Yogesh BG (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yogesh BG updated KAFKA-5998:
-
Description: 
I have one Kafka broker and one Kafka Streams application running. It has been 
running for two days under a load of around 2500 msgs per second. On the third 
day I started getting the exception below for some of the partitions. I have 
16 partitions; only 0_0 and 0_1 give this error:

{{
09:43:25.955 [ks_0_inst-StreamThread-6] WARN  o.a.k.s.p.i.ProcessorStateManager 
- Failed to write checkpoint file to 
/data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
java.io.FileNotFoundException: 
/data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or directory)
at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
~[na:1.7.0_111]
at 
org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
/data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
java.io.FileNotFoundException: 
/data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or directory)
at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
~[na:1.7.0_111]
at 
org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]

[jira] [Resolved] (KAFKA-5997) acks=all does not seem to be honoured

2017-09-30 Thread Ronald van de Kuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ronald van de Kuil resolved KAFKA-5997.
---
Resolution: Not A Bug

> acks=all does not seem to be honoured
> -
>
> Key: KAFKA-5997
> URL: https://issues.apache.org/jira/browse/KAFKA-5997
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.11.0.0
>Reporter: Ronald van de Kuil
>
> I have a 3-node Kafka cluster. I made a topic with 1 partition and a 
> replication factor of 2. 
> The replicas landed on brokers 1 and 3. 
> When I stopped the leader, I could still produce to the topic. Also, I saw 
> the consumer consume the message.
> I produced with acks=all:
> [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig 
> values: 
>   acks = all
> The documentation says this about acks=all:
> "This means the leader will wait for the full set of in-sync replicas to 
> acknowledge the record."
> The producer API returns the offset, which is present in the RecordMetadata.
> I would have expected that the producer could not publish because only 1 
> replica is in sync.
> This is the output of the topic state:
> Topic:topic1    PartitionCount:1    ReplicationFactor:2    Configs:
> Topic: topic1   Partition: 0    Leader: 1   Replicas: 1,3   Isr: 1
> I am wondering whether this is a bug in the current version of the API or 
> whether the documentation (or my understanding) needs an update? I expect 
> the former.
> If there is anything that I can do, please let me know.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5997) acks=all does not seem to be honoured

2017-09-30 Thread Ronald van de Kuil (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16186990#comment-16186990
 ] 

Ronald van de Kuil commented on KAFKA-5997:
---

Thank you Manikumar.

After adding the configuration it works as expected:

[main] ERROR eu.bde.sc4pilot.kafka.Producer - 
org.apache.kafka.common.errors.NotEnoughReplicasException: Messages are 
rejected since there are fewer in-sync replicas than required.


> acks=all does not seem to be honoured
> -
>
> Key: KAFKA-5997
> URL: https://issues.apache.org/jira/browse/KAFKA-5997
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.11.0.0
>Reporter: Ronald van de Kuil
>
> I have a 3-node Kafka cluster. I made a topic with 1 partition and a 
> replication factor of 2. 
> The replicas landed on brokers 1 and 3. 
> When I stopped the leader, I could still produce to the topic. Also, I saw 
> the consumer consume the message.
> I produced with acks=all:
> [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig 
> values: 
>   acks = all
> The documentation says this about acks=all:
> "This means the leader will wait for the full set of in-sync replicas to 
> acknowledge the record."
> The producer API returns the offset, which is present in the RecordMetadata.
> I would have expected that the producer could not publish because only 1 
> replica is in sync.
> This is the output of the topic state:
> Topic:topic1    PartitionCount:1    ReplicationFactor:2    Configs:
> Topic: topic1   Partition: 0    Leader: 1   Replicas: 1,3   Isr: 1
> I am wondering whether this is a bug in the current version of the API or 
> whether the documentation (or my understanding) needs an update? I expect 
> the former.
> If there is anything that I can do, please let me know.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5997) acks=all does not seem to be honoured

2017-09-30 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16186974#comment-16186974
 ] 

Manikumar commented on KAFKA-5997:
--

We need to use min.insync.replicas to enforce durability guarantees. Please 
read the docs about "min.insync.replicas" here:
https://kafka.apache.org/documentation/#topicconfigs
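
For illustration, a minimal producer sketch (the broker address and topic name 
are assumed, and the topic is assumed to have min.insync.replicas=2 set):

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // assumed address
props.put("acks", "all"); // wait for the full *current* set of in-sync replicas
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

// acks=all alone only waits for the replicas currently in the ISR. With
// min.insync.replicas=2 on the topic, sends fail with NotEnoughReplicasException
// once the ISR shrinks to a single replica, which is the behaviour expected here.
try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    producer.send(new ProducerRecord<>("topic1", "key", "value"), (metadata, e) -> {
        if (e != null)
            System.err.println("Send failed: " + e);
    });
}
{code}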


> acks=all does not seem to be honoured
> -
>
> Key: KAFKA-5997
> URL: https://issues.apache.org/jira/browse/KAFKA-5997
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.11.0.0
>Reporter: Ronald van de Kuil
>
> I have a 3-node Kafka cluster. I made a topic with 1 partition and a 
> replication factor of 2. 
> The replicas landed on brokers 1 and 3. 
> When I stopped the leader, I could still produce to the topic. Also, I saw 
> the consumer consume the message.
> I produced with acks=all:
> [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig 
> values: 
>   acks = all
> The documentation says this about acks=all:
> "This means the leader will wait for the full set of in-sync replicas to 
> acknowledge the record."
> The producer API returns the offset, which is present in the RecordMetadata.
> I would have expected that the producer could not publish because only 1 
> replica is in sync.
> This is the output of the topic state:
> Topic:topic1    PartitionCount:1    ReplicationFactor:2    Configs:
> Topic: topic1   Partition: 0    Leader: 1   Replicas: 1,3   Isr: 1
> I am wondering whether this is a bug in the current version of the API or 
> whether the documentation (or my understanding) needs an update? I expect 
> the former.
> If there is anything that I can do, please let me know.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5997) acks=all does not seem to be honoured

2017-09-30 Thread Ronald van de Kuil (JIRA)
Ronald van de Kuil created KAFKA-5997:
-

 Summary: acks=all does not seem to be honoured
 Key: KAFKA-5997
 URL: https://issues.apache.org/jira/browse/KAFKA-5997
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.11.0.0
Reporter: Ronald van de Kuil


I have a 3-node Kafka cluster. I made a topic with 1 partition and a 
replication factor of 2. 

The replicas landed on brokers 1 and 3. 

When I stopped the leader, I could still produce to the topic. Also, I saw the 
consumer consume the message.

I produced with acks=all:

[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig 
values: 
acks = all

The documentation says this about acks=all:
"This means the leader will wait for the full set of in-sync replicas to 
acknowledge the record."

The producer API returns the offset, which is present in the RecordMetadata.

I would have expected that the producer could not publish because only 1 
replica is in sync.

This is the output of the topic state:

Topic:topic1    PartitionCount:1    ReplicationFactor:2    Configs:
Topic: topic1   Partition: 0    Leader: 1   Replicas: 1,3   Isr: 1

I am wondering whether this is a bug in the current version of the API or 
whether the documentation (or my understanding) needs an update? I expect the 
former.

If there is anything that I can do, please let me know.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)