[jira] [Comment Edited] (KAFKA-6052) Windows: Consumers not polling when isolation.level=read_committed
[ https://issues.apache.org/jira/browse/KAFKA-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434927#comment-16434927 ]

Veera edited comment on KAFKA-6052 at 4/12/18 4:58 AM:
-------------------------------------------------------

Hi Jason / Vahid,

Can you please give us an update on the release date of 1.1.1 or 1.2.0?

Regards,
Veeru

was (Author: vmallavarapu):

Hi Jason / Vahid,

Can you please give us an update on the release date of 1.1.1 or 1.2.0?

Regards,
Veeru

> Windows: Consumers not polling when isolation.level=read_committed
> ------------------------------------------------------------------
>
>                 Key: KAFKA-6052
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6052
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer, producer
>    Affects Versions: 0.11.0.0, 1.0.1
>         Environment: Windows 10. All processes running in embedded mode.
>            Reporter: Ansel Zandegran
>            Assignee: Vahid Hashemian
>            Priority: Major
>              Labels: transactions, windows
>             Fix For: 1.2.0, 1.1.1
>
>         Attachments: Prducer_Consumer.log, Separate_Logs.zip, kafka-logs.zip, logFile.log
>
>
> *The same code runs fine on Linux.* I am trying to send a transactional
> record with exactly-once semantics. These are my producer, consumer and
> broker setups.
>
> {code:java}
> public void sendWithTTemp(String topic, EHEvent event) {
>   Properties props = new Properties();
>   props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
>             "localhost:9092,localhost:9093,localhost:9094");
>   //props.put("bootstrap.servers",
>   //          "34.240.248.190:9092,52.50.95.30:9092,52.50.95.30:9092");
>   props.put(ProducerConfig.ACKS_CONFIG, "all");
>   props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
>   props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");
>   props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
>   props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
>   props.put("transactional.id", "TID" + transactionId.incrementAndGet());
>   props.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, "5000");
>   Producer<String, String> producer =
>       new KafkaProducer<>(props,
>                           new StringSerializer(),
>                           new StringSerializer());
>   Logger.log(this, "Initializing transaction...");
>   producer.initTransactions();
>   Logger.log(this, "Initializing done.");
>   try {
>     Logger.log(this, "Begin transaction...");
>     producer.beginTransaction();
>     Logger.log(this, "Begin transaction done.");
>     Logger.log(this, "Sending events...");
>     producer.send(new ProducerRecord<>(topic,
>                                        event.getKey().toString(),
>                                        event.getValue().toString()));
>     Logger.log(this, "Sending events done.");
>     Logger.log(this, "Committing...");
>     producer.commitTransaction();
>     Logger.log(this, "Committing done.");
>   } catch (ProducerFencedException | OutOfOrderSequenceException
>            | AuthorizationException e) {
>     producer.close();
>     e.printStackTrace();
>   } catch (KafkaException e) {
>     producer.abortTransaction();
>     e.printStackTrace();
>   }
>   producer.close();
> }
> {code}
>
> *In Consumer*
> I have set isolation.level=read_committed
>
> *In 3 Brokers*
> I'm running with the following properties:
> {code:java}
> Properties props = new Properties();
> props.setProperty("broker.id", "" + i);
> props.setProperty("listeners", "PLAINTEXT://:909" + (2 + i));
> props.setProperty("log.dirs", Configuration.KAFKA_DATA_PATH + "\\B" + i);
> props.setProperty("num.partitions", "1");
> props.setProperty("zookeeper.connect", "localhost:2181");
> props.setProperty("zookeeper.connection.timeout.ms", "6000");
> props.setProperty("min.insync.replicas", "2");
> props.setProperty("offsets.topic.replication.factor", "2");
> props.setProperty("offsets.topic.num.partitions", "1");
> props.setProperty("transaction.state.log.num.partitions", "2");
> props.setProperty("transaction.state.log.replication.factor", "2");
> props.setProperty("transaction.state.log.min.isr", "2");
> {code}
>
> I am not getting any records in the consumer. When I set
> isolation.level=read_uncommitted, I get the records. I assume that the
> records are not getting committed. What could be the problem? Log attached.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
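The consumer side of the reported setup is only described in prose ("I have set isolation.level=read_committed"). As a minimal sketch of what that configuration might look like, with the group id and offset-reset policy as illustrative assumptions not taken from the report, the properties could be built like this and then passed to `new KafkaConsumer<>(props)` with string deserializers:

```java
import java.util.Properties;

public class ReadCommittedConsumerConfig {

    // Builds consumer properties matching the reporter's description.
    // The only setting they state explicitly is isolation.level; the
    // group id and auto.offset.reset values here are hypothetical.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers",
                "localhost:9092,localhost:9093,localhost:9094");
        props.setProperty("group.id", "read-committed-test"); // hypothetical
        props.setProperty("auto.offset.reset", "earliest");   // hypothetical
        // The setting under test: only records from committed transactions
        // (plus non-transactional records) are returned by poll().
        props.setProperty("isolation.level", "read_committed");
        return props;
    }
}
```

With `isolation.level=read_uncommitted` instead, poll() also returns records from open or aborted transactions, which matches the behaviour difference the reporter observes.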
[jira] [Commented] (KAFKA-6052) Windows: Consumers not polling when isolation.level=read_committed
[ https://issues.apache.org/jira/browse/KAFKA-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16434927#comment-16434927 ]

Veera commented on KAFKA-6052:
------------------------------

Hi Jason / Vahid,

Can you please give us an update on the release date of 1.1.1 or 1.2.0?

Regards,
Veeru
[jira] [Comment Edited] (KAFKA-6052) Windows: Consumers not polling when isolation.level=read_committed
[ https://issues.apache.org/jira/browse/KAFKA-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430366#comment-16430366 ]

Veera edited comment on KAFKA-6052 at 4/11/18 7:28 AM:
-------------------------------------------------------

Hi Jason,

Could you please confirm the release date of either 1.1.1 or 1.2.0? We will wait if it happens this month; otherwise we will take the fix and apply it to our 1.0.0 version.

Thanks,
Veeru

was (Author: vmallavarapu):

Hi Jason,

Could you please confirm the release date of either 1.1.1 or 1.2.0?

Many Thanks.
[jira] [Commented] (KAFKA-6052) Windows: Consumers not polling when isolation.level=read_committed
[ https://issues.apache.org/jira/browse/KAFKA-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430366#comment-16430366 ]

Veera commented on KAFKA-6052:
------------------------------

Hi Jason,

Could you please confirm the release date of either 1.1.1 or 1.2.0?

Many Thanks.
[jira] [Commented] (KAFKA-6052) Windows: Consumers not polling when isolation.level=read_committed
[ https://issues.apache.org/jira/browse/KAFKA-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387652#comment-16387652 ]

Veera commented on KAFKA-6052:
------------------------------

Hi Jason,

This fix worked for us. Thanks for everything.
[jira] [Commented] (KAFKA-6052) Windows: Consumers not polling when isolation.level=read_committed
[ https://issues.apache.org/jira/browse/KAFKA-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16376462#comment-16376462 ]

Veera commented on KAFKA-6052:
------------------------------

Hi [~hachikuji],

Can you please refer to https://issues.apache.org/jira/browse/KAFKA-6153? We pasted and attached the logs there.

Regards,
Veeru
[jira] [Commented] (KAFKA-6052) Windows: Consumers not polling when isolation.level=read_committed
[ https://issues.apache.org/jira/browse/KAFKA-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374153#comment-16374153 ]

Veera commented on KAFKA-6052:
------------------------------

Hi All,

Please let us know when you are planning to provide the fix.

Regards,
Veeru
[jira] [Commented] (KAFKA-6052) Windows: Consumers not polling when isolation.level=read_committed
[ https://issues.apache.org/jira/browse/KAFKA-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16249151#comment-16249151 ]

Veera commented on KAFKA-6052:
------------------------------

Hi All,

We are encountering the same issue, and it is a blocker for our Kafka upgrade. Our setup is also entirely on Windows. The same scenario was tested on Ubuntu 14.04, where it works as expected. Any quick resolution to this issue would be very much appreciated.

Thanks,
Veeru

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Commented] (KAFKA-6153) Kafka Transactional Messaging does not work on windows but on linux
[ https://issues.apache.org/jira/browse/KAFKA-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16240299#comment-16240299 ]

Veera commented on KAFKA-6153:
------------------------------

However, there is no issue seen with the producer.commitTransaction() method itself; we only find the above-mentioned exception in the server logs when debug is enabled. The Kafka consumer is able to consume the published messages with the read_uncommitted isolation level but not with read_committed. This was tested on a Windows 10 machine, with both Kafka and ZooKeeper running on Windows 10.

> Kafka Transactional Messaging does not work on windows but on linux
> -------------------------------------------------------------------
>
>                 Key: KAFKA-6153
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6153
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer, producer
>    Affects Versions: 0.11.0.1
>            Reporter: Changhai Han
>            Priority: Critical
>         Attachments: TransactionalProducer_Notworking.txt
>
>
> As mentioned in the title, Kafka transactional messaging does not work on
> Windows, but it does on Linux.
> The code is like below:
> {code:java}
> stringProducer.initTransactions();
> while (true) {
>     ConsumerRecords<String, String> records = stringConsumer.poll(2000);
>     if (!records.isEmpty()) {
>         stringProducer.beginTransaction();
>         try {
>             for (ConsumerRecord<String, String> record : records) {
>                 LOGGER.info(record.value().toString());
>                 stringProducer.send(new ProducerRecord<String, String>(
>                         "kafka-test-out", record.value().toString()));
>             }
>             stringProducer.commitTransaction();
>         } catch (ProducerFencedException e) {
>             LOGGER.warn(e.getMessage());
>             stringProducer.close();
>             stringConsumer.close();
>         } catch (KafkaException e) {
>             LOGGER.warn(e.getMessage());
>             stringProducer.abortTransaction();
>         }
>     }
> }
> {code}
> When I debug it, it seems to get stuck on committing the transaction. Does
> anyone else experience the same thing? Are there any specific configs that I
> need to add to the producer config? Thanks.
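One side note on the consume-transform-produce loop quoted above, independent of the Windows problem: it commits the producer transaction but never ties the consumer's offsets into it. For end-to-end exactly-once, this API version lets the offsets be committed through the producer via producer.sendOffsetsToTransaction(offsets, groupId) before commitTransaction(). The map passed there must hold, per partition, the *next* offset to read, i.e. the last consumed offset + 1. A plain-JDK sketch of just that off-by-one bookkeeping (partition names keyed as strings for illustration; the real call uses TopicPartition/OffsetAndMetadata):

```java
import java.util.HashMap;
import java.util.Map;

public class TxnOffsetBookkeeping {

    // Given the last consumed offset per partition, return the offsets to
    // commit inside the transaction: always last consumed offset + 1,
    // i.e. the position of the next record the consumer group should read.
    static Map<String, Long> offsetsToCommit(Map<String, Long> lastConsumed) {
        Map<String, Long> commit = new HashMap<>();
        for (Map.Entry<String, Long> e : lastConsumed.entrySet()) {
            commit.put(e.getKey(), e.getValue() + 1);
        }
        return commit;
    }
}
```

In the real loop, handing this map to sendOffsetsToTransaction(...) before committing means that after a crash the group resumes exactly past the records whose outputs were committed, so reprocessing cannot double-produce.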
[jira] [Updated] (KAFKA-6153) Kafka Transactional Messaging does not work on windows but on linux
[ https://issues.apache.org/jira/browse/KAFKA-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Veera updated KAFKA-6153:
--
Attachment: TransactionalProducer_Notworking.txt

-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (KAFKA-6153) Kafka Transactional Messaging does not work on windows but on linux
[ https://issues.apache.org/jira/browse/KAFKA-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16240251#comment-16240251 ]

Veera edited comment on KAFKA-6153 at 11/6/17 1:10 PM:
--

Hi All,

We are also facing the same issue on Windows, but have not tested on Linux. The log files are attached for reference. It is observed that there is an exception, pasted below, while committing the transaction. Your quick help in this regard would be very helpful for migrating our Kafka deployment to 0.11.0.1.

[2017-11-06 18:32:56,192] DEBUG TransactionalId my-Txn-Id complete transition from Ongoing to TxnTransitMetadata(producerId=3004, producerEpoch=0, txnTimeoutMs=6, txnState=PrepareCommit, topicPartitions=Set(TopRatedContent2-0), txnStartTimestamp=1509973373689, txnLastUpdateTimestamp=1509973376190) (kafka.coordinator.transaction.TransactionMetadata)
[2017-11-06 18:32:56,192] DEBUG [Transaction State Manager 0]: Updating my-Txn-Id's transaction state to TxnTransitMetadata(producerId=3004, producerEpoch=0, txnTimeoutMs=6, txnState=PrepareCommit, topicPartitions=Set(TopRatedContent2-0), txnStartTimestamp=1509973373689, txnLastUpdateTimestamp=1509973376190) with coordinator epoch 2 for my-Txn-Id succeeded (kafka.coordinator.transaction.TransactionStateManager)
[2017-11-06 18:32:56,193] DEBUG TransactionalId my-Txn-Id prepare transition from PrepareCommit to TxnTransitMetadata(producerId=3004, producerEpoch=0, txnTimeoutMs=6, txnState=CompleteCommit, topicPartitions=Set(), txnStartTimestamp=1509973373689, txnLastUpdateTimestamp=1509973376192) (kafka.coordinator.transaction.TransactionMetadata)
[2017-11-06 18:32:56,194] DEBUG Connection with /192.168.10.73 disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
	at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:87)
	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:75)
	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:203)
	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:167)
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:381)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
	at kafka.network.Processor.poll(SocketServer.scala:500)
	at kafka.network.Processor.run(SocketServer.scala:435)
	at java.lang.Thread.run(Thread.java:748)
[2017-11-06 18:32:56,194] DEBUG Connection with /127.0.0.1 disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
	at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:87)
	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:75)
	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:203)
	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:167)
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:381)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
	at kafka.network.Processor.poll(SocketServer.scala:500)
	at kafka.network.Processor.run(SocketServer.scala:435)
	at java.lang.Thread.run(Thread.java:748)
[2017-11-06 18:32:56,238] DEBUG Searching offset for timestamp -2 (kafka.log.Log)

was (Author: vmallavarapu):

Hi All,

We are also facing the same issue on Windows, but have not tested on Linux. The log files are attached for reference. It is observed that there is an exception, pasted below, while committing the transaction. Your quick help in this regard would be very helpful for migrating our Kafka deployment to 0.11.0.1.
[jira] [Commented] (KAFKA-6153) Kafka Transactional Messaging does not work on windows but on linux
[ https://issues.apache.org/jira/browse/KAFKA-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16240251#comment-16240251 ]

Veera commented on KAFKA-6153:
--

Hi All,

We are also facing the same issue on Windows, but have not tested on Linux. The log files are attached for reference. It is observed that there is an exception, pasted in the attached logs, while committing the transaction. Your quick help in this regard would be very helpful for migrating our Kafka deployment to 0.11.0.1.

-- This message was sent by Atlassian JIRA (v6.4.14#64029)